INTRODUCTION
To perform at their best in the market, users of information systems demand ever-faster delivery of those systems as well as greater flexibility. Development methods are increasingly geared to follow changing requirements closely, even in the midst of a project. The architectures and development environments that make all this possible are growing in complexity and scale. This changes the requirements placed on development.
The growing demand on the developer: to deliver the right quality, on time and right first time!
In addition to increased knowledge of development languages, methods, environments and architectures, this also calls
for deeper knowledge of quality delivery. Difficult questions in this connection are: what is the quality level
required for the client, and how can this be realised and demonstrated through testing? Individual interpretation of
the required quality and random testing provide no guarantee of eventual success. Predictable and proven quality of the
delivered software gives the project or the department the opportunity to organise the subsequent test levels, such as
the system test and the acceptance test, more efficiently. A reduction in the number of redeliveries and retests in
those test levels, in particular, delivers significant time-savings. In order to realise the higher quality of
software, increasingly high demands are placed on the development tests, and development testing is fast becoming a
mature part of the entire testing process.
The days are numbered when barely any requirements are set for the development tests, and the (increasingly mature) system and acceptance tests are relied upon to rectify the lack of quality before going into production. The resulting lengthy and costly reworking and retesting cycles have become unacceptable to most organisations.
DEVELOPMENT TESTING EXPLAINED
This section consists of a number of subsections. These are, in sequence:
- What is development testing?
- Characteristics - With a focus on how they differ from system tests and acceptance tests
- Advantages and disadvantages of improved development tests
- Context of development testing - The influence of the development method and technical implementation
- Unit test
- Unit integration test
- Quality measures - Various measures, including the concept of selected quality of development tests
- Test tools for development tests.
What is development testing?
Development testing is understood to mean testing using knowledge of the technical implementation of the system. This
starts with the testing of the first/smallest parts of the system: routines, units, programs, modules, components,
objects, etc. Within TMap, the term ‘unit’ and therefore unit test is used exclusively in this context.
When it has been established that the most elementary parts of the system are of sufficient quality, larger parts of
the system are tested integrally during the unit integration tests. The emphasis here lies on the data throughput and
the interfacing between the units up to subsystem level.
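As an illustration, a unit test at this level might look as follows. This is a minimal sketch using Python's built-in unittest module; the discount function is a hypothetical unit under test, not an example taken from any specific system.

```python
import unittest

def discount(amount: float, percentage: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percentage <= 100:
        raise ValueError("percentage must be between 0 and 100")
    return round(amount * (1 - percentage / 100), 2)

class DiscountTest(unittest.TestCase):
    """Unit test: exercises the smallest part of the system in isolation."""

    def test_regular_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_boundary_values(self):
        # Boundary checks: 0% and 100% discount.
        self.assertEqual(discount(80.0, 0), 80.0)
        self.assertEqual(discount(80.0, 100), 0.0)

    def test_invalid_percentage_is_rejected(self):
        # Fault handling like this is most easily verified at unit level.
        with self.assertRaises(ValueError):
            discount(80.0, 150)
```

Such a test is typically run with `python -m unittest` as part of every build, so that each delivery of the unit is accompanied by evidence of its quality.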
Characteristics
A pitfall in organising development tests is the temptation to set up the test process from the viewpoint of a system test or acceptance test. When development tests are compared with the system test and the acceptance test, a number of significant differences come to the fore:
- In contrast to the system test and acceptance test, the development tests cannot be organised as an independent process with a more or less independent team. The development tests form an integral part of software development, and the phasing of the test activities is integrated with the activities of the developers.
- Because development testing uses knowledge of the technical implementation of the system, other types of defects are found than those found by system and acceptance tests. It may be expected of development tests, for example, that every statement in the code has been exercised. A similar degree of coverage is, in practice, very difficult for system and acceptance tests to achieve, since these test levels focus on different aspects. It is therefore difficult to replace development tests with system and acceptance tests.
- With the unit tests, in particular, the discoverer of the defects (i.e. the tester) is often the same individual who solves them (i.e. the developer). This means that communication on the defects may be minimal.
- The approach of development testing is that all found defects are solved before the software is transferred. The reporting of development testing may therefore be more restricted than that of system and acceptance testing.
- It is the first test process, which means that all the defects are still in the product, requiring cheap and fast defect correction. In order to realise this, a flexible test environment with few procedural barriers is of great importance.
- Development tests are often carried out by the developers themselves. The developer's basic intention is to demonstrate that the product works, while a tester looks to demonstrate the difference between the required quality and the actual quality of the product (and actively goes in search of defects). This difference in mindset means that sizeable and/or in-depth development tests run counter to the developer's intention and, with that, meet with resistance and/or result in carelessly executed tests.
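The point about statement coverage can be illustrated with a small sketch. The function below is hypothetical; the comments mark which statements a given test case exercises, showing why knowledge of the code is needed to reach every statement.

```python
# Sketch: why full statement coverage needs knowledge of the code.
# The function and figures below are hypothetical illustrations.

def shipping_cost(weight_kg: float, express: bool) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")   # statement 1
    cost = 5.0 + 0.5 * weight_kg                      # statement 2
    if express:
        cost *= 2                                     # statement 3
    return cost                                       # statement 4

# A black-box test of "normal" shipping touches statements 2 and 4 only.
assert shipping_cost(10, express=False) == 10.0

# Two further cases, chosen by reading the code, cover statements 1 and 3.
assert shipping_cost(10, express=True) == 20.0
try:
    shipping_cost(0, express=False)
except ValueError:
    pass  # the error branch is now exercised as well
```

In practice a coverage tool rather than manual bookkeeping would be used to demonstrate that every statement has been exercised.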
Advantages and disadvantages of improved development tests
In practice, development testing is often unstructured: tests are not planned or prepared; no test design techniques
are used and there is no insight into what has or has not been tested or to what depth. With that, insight is also
lacking into the quality of the (tested) product. Often during the system and acceptance tests, there are lengthy and
inefficient cycles of test/repair/retest in order to get the quality up to an acceptable level. It therefore stands to
reason that development testing should be better organised. A number of arguments are presented below as to why this
does not take place in practice (arguments against) and why it is important that it should take place (arguments
for).
Arguments against
The most important arguments as to why the need for more structure and thoroughness in development
testing is not self-evident are:
- Pressure of time / not cost-effective - Developers are often under severe pressure of time. The priorities of the development team are defined by the criteria by which it is judged. Assessment is usually made based on hard criteria, such as lead-time and delivered functionality. Assessment by a much softer criterion, such as quality, is more difficult and is therefore rare in practice. A developer who is committed to a completion date will either communicate openly and honestly when things are not going smoothly, or give less time to his own testing if the coding is in trouble. From the point of view of personal performance (and assessment), the latter is not unthinkable. After all, the benefits of thorough testing to a development team are relatively small, even though they are many times greater for the project as a whole.
- Sufficient faith in the quality - A developer is usually proud of his product and considers it to be of good quality. It is therefore not logical for a developer to expend a lot of effort in finding fault with his own product.
- There will be another thorough test to follow - In the subsequent phase, e.g. the system test, a much more intensive test will be carried out than development testing can ever provide. Why, then, should the development tester pay much attention to more and better testing, when it is to take place later more extensively?
Arguments for
The most important argument for more structure and thoroughness in development testing is that it enables the developer to establish for himself that the software is of sufficient quality to be delivered to the next phase, probably the system test. What constitutes "sufficient quality" is of course open to discussion. The points below indicate that delivering "sufficient quality" has many advantages for the development team:
- Less reworking will be necessary after delivery, since the products that are delivered to the subsequent phase are of higher quality.
- The planning is better, since the often uncertain volume of rework declines.
- The lead-time of the total development phase is, for the same reason, shorter.
- Reworking as early as possible is much cheaper than at a later stage, since all the knowledge of the developed products is still fresh in the memory, whereas by a later stage people have often already left the development team.
- Analysing defects you find yourself is much faster and easier than analysing defects found by others. The more distance (both organisational and physical) the finder has, the more difficult and time-consuming the analysis often is. All the more so, since in later phases the system is tested as a whole and a found defect may be located in any of many separate components.
- The developers get faster feedback on the mistakes they make, so that they are better able to prevent similar mistakes in other units.
- Certain defects, particularly on the boundaries between system functionality and the underlying operating system, database and network, can best be detected with development tests. If the development testing finds too small a proportion of these defects, this has consequences for the system and acceptance tests, which then have to expend a disproportionate effort (in the detection of such defects), using inefficient techniques, in order to achieve the same quality of the test object as if the development tests had been adequately executed.
These advantages apply to the project as a whole, and to an even greater degree to the total life cycle of the system, because the later test levels also benefit from them (often even more so!), for example because far fewer retests are necessary. Accordingly, the advantages of a more structured development test approach far outweigh the disadvantages. However, a necessary condition for successful structuring of the development testing is that the various
parties involved, such as the client, the line and project manager and the developers, are aware of the importance of a
better test process. For example, the project manager should assess the development team much more on delivered quality
than simply on time and money. The development department may also set requirements on all the executed
development tests. Each development test in an individual project should at least meet these requirements.
Context of development testing
Development testing bears a very close relationship with the development process and cannot really be considered
separately from it. Much more knowledge of the technical implementation of the system or package is required as far as
development testing is concerned than for a system or acceptance test. In order to organise the development test well,
allowance must certainly be made for the development process used and the technical implementation.
Ensure that, as adviser or test manager in the organisation of the development testing, you have sufficient knowledge
of the development process used and the technical implementation. This will also make you a useful partner in the
dialogue with the developers, without having to be an expert.
Influence of the development method
Roughly three streams of development methods can be distinguished: waterfall, iterative and agile.
- Waterfall, which includes the following characteristics: the development of a system in one go, phased with clear transfer points, often a long-cyclical process (including SDM).
- Iterative, characterised by: incremental development of the system, phased with clear transfer points; a short-cyclical process (iterations) (including DSDM and RUP). Iterative methods take up an intermediate position between waterfall and agile.
- Agile, characterised by four principles: individuals and interaction above processes and tools, working software above extensive system documentation, user input above contract negotiation, responding to changes above following a plan (including eXtreme Programming and SCRUM).
To discover what influence the development method has on (the organisation of) the development testing, it should be
considered to what degree the following aspects play a role:
- Instructions for development test activities - Many methods go no further than indicating that development tests need to be carried out. Structured guidelines are seldom supplied. Extreme Programming (XP), as one of the agile methods, is a positive exception in this area. Three of its most important practices in development testing are Pair Programming, Test Driven Development and Continuous Integration.
- Quality of the test basis - With the waterfall method, the test basis is usually established in a formally described form. With iterative and agile development methods, the form of the test basis is much less formal and often agreed verbally (through consultation with users). This makes it more difficult with iterative and agile methods to discover all that needs to be tested. For example, fault handling and exceptional situations (together estimated to account for as much as 80% of the code) are often under-exposed in such forms of test basis. Greater reliance is placed on the expertise and creativity of the development testers as regards devising and executing tests for these.
- Long- or short-cyclical development - With short-cyclical development, proportionately more time is spent on testing, particularly due to the need to execute a much more frequent regression test (every cycle at minimum) on the system developed so far.
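The Test Driven Development practice mentioned above can be sketched as a short red-green cycle: the test is written first and fails, after which just enough code is written to make it pass. The fizzbuzz example below is a hypothetical illustration of the rhythm, not a prescribed technique.

```python
import unittest

# Step 1 (red): the tests are written before the implementation exists,
# so the first run fails with a NameError.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiples_of_both(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

    def test_other_numbers(self):
        self.assertEqual(fizzbuzz(7), "7")

# Step 2 (green): just enough code is written to make the tests pass.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Under Continuous Integration, accumulated tests of this kind are rerun on every build, which is what makes the frequent regression test of short-cyclical development affordable.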
Influence of the technical implementation
Over the years, the IT world has grown into a patchwork quilt of technological solutions. To represent this simply, you
could say that the first systems were set up as monoliths, meaning that the presentation, application logic and
information storage were one giant whole. Some of these systems have been in operation for more than 30 years now. The
monolithic systems were followed by systems based on client/server architectures. Then came the 3-layer systems with
separate presentation, application logic and database layers. In parallel with this, obviously, there was the rise of
the big software packages, such as SAP, and of Internet and browser-based applications. These days, many systems are
set up in distributed fashion, which means that they consist of different, often physically dispersed, components or
services, while the system is still seen by the outside world as a cohesive whole, owing to close collaboration.
The systems were developed with the aid of a large arsenal of programming languages, whether or not object-oriented, in
development environments that support (automated) testing to a greater or lesser degree.
As indicated, testing is a risk-based activity, in which risk = chance of failure x damage, with chance of failure =
frequency of use x chance of a fault. The relevance of the above summary of 50 years of system development in one
paragraph is that the technical implementation determines to a great degree the type of faults that can be made and in
which parts the chances of faults are the greatest. The test strategy of development testing is thus strongly dependent
on the technical implementation, more so than the system and acceptance tests, where more attention is paid to the
specifications of the system and the potential damage.
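The risk formula can be made concrete in a small prioritisation sketch. All component names and figures below are invented for illustration; only the formula itself comes from the text.

```python
# Sketch of risk-based prioritisation: risk = chance of failure x damage,
# with chance of failure = frequency of use x chance of a fault.
# All components and figures below are hypothetical.

components = [
    # (name, frequency of use, chance of a fault, damage)
    ("payment module",   0.9, 0.3, 100),
    ("report generator", 0.2, 0.5,  10),
    ("login screen",     1.0, 0.1,  50),
]

def risk(freq: float, fault: float, damage: float) -> float:
    chance_of_failure = freq * fault
    return chance_of_failure * damage

# Rank components so that test effort goes to the highest risk first.
ranked = sorted(components, key=lambda c: risk(*c[1:]), reverse=True)
for name, freq, fault, damage in ranked:
    print(f"{name}: risk = {risk(freq, fault, damage):.1f}")
```

In this invented example the payment module would receive the most thorough development test, even though the login screen is used more often, because its combination of fault chance and damage yields the highest risk.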
TEST ACTIVITIES
The development tests, i.e. the test levels of unit test and unit integration test, form an inextricable part of the
developer’s activities. They are not organised as an independent process with an independent team. Still, a number of
different activities can be identified for the process of the unit test and unit integration test. In this section, the
activities of the development tests are described in terms of the TMap life cycle model. With this, the sequence and
dependencies become clearer and the process becomes more recognisable to the test adviser or test manager. In the TMap
life cycle model, the test activities are divided across seven phases. These phases are Planning, Control, Setting up
and maintaining infrastructure, Preparation, Specification, Execution and Completion. In practice, development-testing
activities are very much less visible and recognisable than in the system or acceptance test. The results of
activities are also not well documented. For that reason, reading the activities below may create an impression of unwanted formality. This is not the intention. The descriptions are intended to provide a rounded picture of the activities to be carried out, regardless of the degree of formality.